Evaluation of linearly solvable Markov decision process with dynamic model learning in a mobile robot navigation task

Authors

  • Ken Kinjo
  • Eiji Uchibe
  • Kenji Doya
Abstract

The linearly solvable Markov decision process (LMDP) is a class of optimal control problems in which the Bellman equation can be converted into a linear equation by an exponential transformation of the state value function (Todorov, 2009b). In an LMDP, the optimal value function and the corresponding control policy are obtained by solving an eigenvalue problem in a discrete state space, or an eigenfunction problem in a continuous state space, using knowledge of the system dynamics and the action, state, and terminal cost functions. In this study, we evaluate the effectiveness of the LMDP framework in real robot control, in which the dynamics of the body and the environment have to be learned from experience. We first perform a simulation study of a pole swing-up task to evaluate the effect of the accuracy of the learned dynamics model on the derived action policy. The result shows that a crude linear approximation of the non-linear dynamics can still allow solution of the task, albeit with a higher total cost. We then perform real robot experiments on a battery-catching task using our Spring Dog mobile robot platform. The state is given by the position and size of a battery in the camera view and the two neck joint angles. The action is the velocities of the two wheels, while the neck joints are controlled by a visual servo controller. We test linear and bilinear dynamic models in tasks with quadratic and Gaussian state cost functions. In the quadratic cost task, the LMDP controller derived from a learned linear dynamics model performed equivalently to the optimal linear quadratic regulator (LQR). In the non-quadratic task, the LMDP controller with a linear dynamics model showed the best performance. The results demonstrate the usefulness of the LMDP framework in real robot control, even when simple linear models are used for dynamics learning.
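
The abstract states that, in a discrete state space, the optimal value function and policy follow from an eigenvalue problem. As a minimal, hypothetical sketch of that step (following the average-cost formulation cited as Todorov, 2009b; the passive transition matrix P and state-cost vector q below are illustrative placeholders, not data from the paper), the desirability function z = exp(-v) can be computed as the principal eigenvector of diag(exp(-q)) P by power iteration, and the optimal controlled transitions are obtained by reweighting the passive dynamics with z:

import numpy as np

def solve_lmdp(P, q, n_iter=10000, tol=1e-12):
    # P[i, j] = p(j | i): passive (uncontrolled) transition probabilities
    # q[i]: cost of state i per step
    # Average-cost LMDP: z is the principal eigenvector of diag(exp(-q)) @ P,
    # and the value function is v(x) = -log z(x) up to an additive constant.
    G = np.diag(np.exp(-q)) @ P
    z = np.ones(P.shape[0])
    for _ in range(n_iter):
        z_next = G @ z
        z_next /= np.linalg.norm(z_next)   # normalize; the eigenvalue scales out
        if np.linalg.norm(z_next - z) < tol:
            z = z_next
            break
        z = z_next
    # Optimal controlled transitions: u*(j | i) proportional to p(j | i) z(j)
    U = P * z[None, :]
    U /= U.sum(axis=1, keepdims=True)
    return z, U

# Toy 5-state ring with one cheap state (illustrative values only)
n = 5
P = 0.5 * (np.eye(n, k=1) + np.eye(n, k=-1))
P[0, n - 1] = P[n - 1, 0] = 0.5
q = np.array([1.0, 1.0, 0.1, 1.0, 1.0])
z, U = solve_lmdp(P, q)
print(-np.log(z))   # value function up to an additive constant

In the setting of this paper, the passive dynamics would first be estimated from the robot's own experience (e.g. with a learned linear or bilinear model) before this solution step is applied.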

Similar articles

Navigation of a Mobile Robot Using Virtual Potential Field and Artificial Neural Network

Mobile robot navigation is one of the basic problems in robotics. In this paper, a new approach is proposed for autonomous mobile robot navigation in an unknown environment. The proposed approach is based on learning virtual parallel paths that propel the mobile robot toward the track using a multi-layer, feed-forward neural network. For training, a human operator navigates the mobile robot in ...

Dynamic Obstacle Avoidance by Distributed Algorithm based on Reinforcement Learning (RESEARCH NOTE)

In this paper we focus on the application of reinforcement learning to obstacle avoidance in dynamic environments in wireless sensor networks. A distributed algorithm based on reinforcement learning is developed for sensor networks to guide a mobile robot through dynamic obstacles. The sensor network models the danger of the area under coverage as obstacles, and has the property of adoption o...

A New Method of Mobile Robot Navigation: Shortest Null Space

In this paper, a new method was proposed for the navigation of a mobile robot in an unknown dynamic environment. The robot could detect only a limited radius of its surroundings with its sensors and moved along the shortest null space (SNS) toward the goal. In the case of no obstacle, SNS was a direct path from the robot to the goal; however, in the presence of obstacles, SNS was a space around the r...

A Q-learning Based Continuous Tuning of Fuzzy Wall Tracking

A simple, easy-to-implement algorithm is proposed to address the wall-tracking task of an autonomous robot. The robot should navigate in unknown environments, find the nearest wall, and track it based solely on locally sensed data. The proposed method benefits from coupling fuzzy logic and Q-learning to meet the requirements of autonomous navigation. Fuzzy if-then rules provide a reliable decision maki...

Journal:

Volume 7, Issue 

Pages: -

Publication year: 2013